Beyond the problems covered in part 2, more issues surfaced the longer we ran the system. Some of them couldn't be solved inside MongoDB itself, so we handled them at the application level.
As the data grew, performance was the first thing we had to address. Disk space was another: the more we stored, the more it cost. So we had to use storage efficiently and find ways to shrink the data.
Concretely, we added a periodic check on the backend: if a matched pair can never chat again (for example, one side has blocked the other), we delete their conversation to reclaim space.
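A minimal sketch of that cleanup rule, assuming a simplified document shape — the `blocked` flag and field names here are illustrative, not our actual schema:

```python
def removable_conversations(conversations):
    """Return the ids of conversations that can never produce new
    messages, i.e. conversations where one side blocked the other.

    `conversations` is a list of dicts standing in for MongoDB
    documents; a real job would run a delete query on the collection.
    """
    return [c["_id"] for c in conversations if c.get("blocked")]

chats = [
    {"_id": 1, "blocked": True},   # blocked pair: safe to delete
    {"_id": 2, "blocked": False},  # still active: keep
]
print(removable_conversations(chats))  # → [1]
```

The same filter can be expressed directly as a MongoDB delete query in the periodic job, so nothing has to be loaded into application memory.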
The odd part is that deleting documents alone doesn't reduce disk usage. You also have to run this command: db.runCommand({compact: 'collection name'})
And even then, compact only reclaims space when the file's layout makes it worthwhile. WiredTiger's source comments spell out the heuristics:
	/*
	 * We do compaction by copying blocks from the end of the file to the
	 * beginning of the file, and we need some metrics to decide if it's
	 * worth doing.  Ignore small files, and files where we are unlikely
	 * to recover 10% of the file.
	 */
	/* Sum the available bytes in the initial 80% and 90% of the file. */
	/*
	 * Skip files where we can't recover at least 1MB.
	 *
	 * If at least 20% of the total file is available and in the first 80%
	 * of the file, we'll try compaction on the last 20% of the file; else,
	 * if at least 10% of the total file is available and in the first 90%
	 * of the file, we'll try compaction on the last 10% of the file.
	 *
	 * We could push this further, but there's diminishing returns, a mostly
	 * empty file can be processed quickly, so more aggressive compaction is
	 * less useful.
	 */
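The heuristic in that comment can be paraphrased as a small decision function. This is only a sketch of the rule as quoted above — the real WiredTiger code tracks per-block availability inside the file, and the function name and parameters here are assumptions:

```python
MB = 1024 * 1024

def compact_plan(file_size, avail_first_80, avail_first_90):
    """Decide which tail fraction of a file is worth compacting.

    avail_first_80 / avail_first_90: free bytes sitting inside the
    initial 80% / 90% of the file (space that can absorb blocks
    copied back from the end of the file).
    Returns 0.20 or 0.10 (fraction of the tail to compact), or None
    when compaction isn't worth attempting.
    """
    # If at least 20% of the whole file is free space located in the
    # first 80%, try compacting the last 20% of the file.
    if avail_first_80 >= file_size // 5 and avail_first_80 >= MB:
        return 0.20
    # Otherwise fall back to the weaker condition: at least 10% of
    # the file free within the first 90% -> compact the last 10%.
    if avail_first_90 >= file_size // 10 and avail_first_90 >= MB:
        return 0.10
    # Small files, or files unlikely to recover at least 1MB, are
    # skipped entirely.
    return None

print(compact_plan(100 * MB, 25 * MB, 30 * MB))  # → 0.2
print(compact_plan(100 * MB, 5 * MB, 12 * MB))   # → 0.1
print(compact_plan(100 * MB, 1 * MB, 2 * MB))    # → None
```

This is why simply deleting documents wasn't enough for us: until the free space ends up distributed so that these thresholds are met, compact has nothing profitable to do.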
It took us a long time to track this down before we finally solved it.
New posts are shared on Facebook: https://www.facebook.com/gigi.wuwu/
Comments and discussion are welcome.